55 research outputs found

    GESPAR: Efficient Phase Retrieval of Sparse Signals

    We consider the problem of phase retrieval, namely, recovery of a signal from the magnitude of its Fourier transform, or of any other linear transform. Due to the loss of the Fourier phase information, this problem is ill-posed. Therefore, prior information on the signal is needed in order to enable its recovery. In this work we consider the case in which the signal is known to be sparse, i.e., it consists of a small number of nonzero elements in an appropriate basis. We propose a fast local search method for recovering a sparse signal from measurements of its Fourier transform (or other linear transform) magnitude, which we refer to as GESPAR: GrEedy Sparse PhAse Retrieval. Our algorithm does not require matrix lifting, unlike previous approaches, and therefore is potentially suitable for large-scale problems such as images. Simulation results indicate that GESPAR is fast and more accurate than existing techniques in a variety of settings. Comment: Generalized to non-Fourier measurements, added 2D simulations, and a theorem for convergence to a stationary point.
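
    The measurement model described here (magnitudes of a linear transform of a sparse signal) can be made concrete with a small numerical sketch. The code below is not GESPAR itself; it is a naive alternating-minimization baseline for the same model, written in NumPy with illustrative problem sizes, and it carries no recovery guarantees.

```python
import numpy as np

def sparse_phase_retrieval(A, b, s, n_iters=500, seed=0):
    """Toy alternating-minimization baseline for sparse phase retrieval.

    Given b = |A x| (element-wise magnitudes) and a sparsity level s,
    alternate between (1) assigning the current signs/phases to the
    measurements and (2) solving a least-squares problem followed by hard
    thresholding to the s largest entries.  This is NOT GESPAR, only a
    simple reference point for the same measurement model.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = rng.standard_normal(n)              # random real initialization
    for _ in range(n_iters):
        z = A @ x
        phase = np.sign(z)
        phase[phase == 0] = 1.0
        y = b * phase                        # impose the measured magnitudes
        x, *_ = np.linalg.lstsq(A, y, rcond=None)
        keep = np.argsort(np.abs(x))[-s:]    # hard threshold to sparsity s
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, m, s = 64, 128, 3                     # illustrative sizes only
    A = rng.standard_normal((m, n))          # generic (non-Fourier) linear transform
    x_true = np.zeros(n)
    support = rng.choice(n, s, replace=False)
    x_true[support] = rng.standard_normal(s)
    b = np.abs(A @ x_true)
    x_hat = sparse_phase_retrieval(A, b, s)
    # for real signals recovery is at best up to a global sign flip
    err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
    print("relative error:", err / np.linalg.norm(x_true))
```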

    Phase Retrieval with Application to Optical Imaging

    This review article provides a contemporary overview of phase retrieval in optical imaging, linking the relevant optical physics to the information processing methods and algorithms. Its purpose is to describe the current state of the art in this area, identify challenges, and suggest a vision for areas where signal processing methods can have a large impact on optical imaging and on the world of imaging at large, with applications in a variety of fields ranging from biology and chemistry to physics and engineering.
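
    For reference, the canonical discrete problem the review surveys can be stated as follows; the notation below is generic and not drawn from the article itself.

```latex
% Discrete Fourier phase retrieval (generic notation, not the article's):
% only the DFT magnitudes are observed, which is equivalent to observing the
% autocorrelation of the signal -- hence the ill-posedness without priors
% such as support, sparsity, or additional structured measurements.
\begin{equation}
  \text{find } x \in \mathbb{C}^{N}
  \quad \text{s.t.} \quad
  \left| \sum_{n=0}^{N-1} x[n]\, e^{-2\pi i k n / N} \right| = b[k],
  \qquad k = 0, \dots, N-1 .
\end{equation}
```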

    Sparsity-based sub-wavelength imaging with partially incoherent light via quadratic compressed sensing

    We demonstrate that sub-wavelength optical images borne on partially-spatially-incoherent light can be recovered, from their far-field or from the blurred image, given the prior knowledge that the image is sparse, and only that. The reconstruction method relies on the recently demonstrated sparsity-based sub-wavelength imaging. However, for partially-spatially-incoherent light, the relation between the measurements and the image is quadratic, yielding non-convex measurement equations that do not conform to previously used techniques. Consequently, we demonstrate a new algorithmic methodology, referred to as quadratic compressed sensing, which can be applied to a range of other problems involving information recovery from partial correlation measurements, including when the correlation function has local dependencies. Specifically for microscopy, this method can be readily extended to white-light microscopes with the additional knowledge of the light source spectrum. Comment: 16 pages.
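
    The "quadratic relation between the measurements and the image" can be stated schematically as follows; the symbols here are illustrative placeholders rather than the paper's notation.

```latex
% Schematic quadratic measurement model (illustrative notation, not the paper's):
% for partially spatially incoherent light, measured intensities are linear in
% the correlation (mutual-intensity) matrix G, and therefore quadratic in the
% underlying field x, which is what makes the problem non-convex.
\begin{align}
  G   &= \mathbb{E}\!\left[ \mathbf{x}\,\mathbf{x}^{\mathsf H} \right], \\
  y_i &= \operatorname{tr}\!\left( A_i\, G \right) + n_i
       \;=\; \mathbb{E}\!\left[ \mathbf{x}^{\mathsf H} A_i\, \mathbf{x} \right] + n_i,
  \qquad i = 1, \dots, m .
\end{align}
```

    Per the abstract, sparsity of the image in an appropriate basis is the prior that makes recovery from such quadratic (correlation) measurements feasible.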

    Deep-STORM: super-resolution single-molecule microscopy by deep learning

    We present an ultra-fast, precise, parameter-free method, which we term Deep-STORM, for obtaining super-resolution images from stochastically blinking emitters, such as fluorescent molecules used for localization microscopy. Deep-STORM uses a deep convolutional neural network that can be trained on simulated data or experimental measurements, both of which are demonstrated. The method achieves state-of-the-art resolution under challenging signal-to-noise conditions and high emitter densities, and is significantly faster than existing approaches. Additionally, no prior information on the shape of the underlying structure is required, making the method applicable to any blinking dataset. We validate our approach by super-resolution image reconstruction of simulated and experimentally obtained data. Comment: 7 pages; added code download reference and DOI for the journal version.
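
    To make the train-on-simulations idea concrete, here is a toy fully-convolutional sketch in PyTorch that maps simulated camera frames of blinking emitters to an upsampled emitter-density map. The forward model, upsampling factor, architecture, and loss are all simplifications invented for this sketch and are not the published Deep-STORM design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

UP = 4          # super-resolution upsampling factor (assumed for this toy example)
CAM = 16        # camera frame size in pixels
HI = CAM * UP   # high-resolution grid size

def simulate_batch(batch=8, max_emitters=5, sigma=1.0):
    """Simulate diffraction-limited frames of randomly placed blinking emitters.

    Deliberately crude forward model: Gaussian PSF on a fine grid,
    average-pool binning to camera pixels, additive Gaussian noise.
    Returns (noisy camera frames, high-resolution target density maps).
    """
    target = torch.zeros(batch, 1, HI, HI)
    for b in range(batch):
        k = torch.randint(1, max_emitters + 1, (1,)).item()
        ys = torch.randint(0, HI, (k,))
        xs = torch.randint(0, HI, (k,))
        target[b, 0, ys, xs] = 1.0
    r = torch.arange(-6, 7, dtype=torch.float32)          # 13-tap Gaussian kernel
    g = torch.exp(-r**2 / (2 * (sigma * UP) ** 2))
    kernel = (g[:, None] * g[None, :])
    kernel = (kernel / kernel.sum()).view(1, 1, 13, 13)
    blurred = F.conv2d(target, kernel, padding=6)          # blur on the fine grid
    frames = F.avg_pool2d(blurred, UP)                     # bin to camera pixels
    frames = frames + 0.01 * torch.randn_like(frames)      # additive noise
    return frames, target

class ToyDeepSTORM(nn.Module):
    """Small fully-convolutional net: camera frame -> super-resolved density map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=UP, mode="nearest"),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    model = ToyDeepSTORM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(200):                                # short demo training loop
        frames, target = simulate_batch()
        pred = model(frames)
        # MSE against a slightly smoothed target keeps the loss well behaved;
        # this is a stand-in, not the paper's loss function
        soft_target = F.avg_pool2d(target, 3, stride=1, padding=1)
        loss = F.mse_loss(pred, soft_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if step % 50 == 0:
            print(f"step {step:3d}  loss {loss.item():.5f}")
```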

    DeepSTORM3D: dense three-dimensional localization microscopy and point spread function design by deep learning

    Localization microscopy is an imaging technique in which the positions of individual nanoscale point emitters (e.g. fluorescent molecules) are determined at high precision from their images. This is the key ingredient in single/multiple-particle tracking and several super-resolution microscopy approaches. Localization in three dimensions (3D) can be performed by modifying the image that a point source creates on the camera, namely, the point-spread function (PSF). The PSF is engineered using additional optical elements to vary distinctively with the depth of the point source. However, localizing multiple adjacent emitters in 3D poses a significant algorithmic challenge, due to the lateral overlap of their PSFs. Here, we train a neural network to receive an image containing densely overlapping PSFs of multiple emitters over a large axial range and to output a list of their 3D positions. Furthermore, we use the network to design the optimal PSF for the multi-emitter case. We demonstrate our approach numerically as well as experimentally by 3D STORM imaging of mitochondria, and by volumetric imaging of dozens of fluorescently labeled telomeres occupying a mammalian nucleus in a single snapshot. Comment: main text: 9 pages, 5 figures; supplementary information: 29 pages, 20 figures.
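
    Two small helpers illustrate the ideas in this abstract: a depth-dependent PSF (here a crude astigmatic Gaussian standing in for an engineered PSF) and a peak-finding step that turns a dense network output volume into a list of 3D positions. All functions, sizes, and parameters below are hypothetical stand-ins rather than the paper's implementation; the sketch assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def astigmatic_psf(z, size=15, sigma0=1.2, c=0.5):
    """Toy depth-dependent PSF: an astigmatic Gaussian whose x and y widths
    change in opposite directions with depth z (arbitrary units).  Real PSF
    engineering uses dedicated optical elements / phase masks; this only
    illustrates how depth can be encoded in the spot shape."""
    sx = sigma0 * np.sqrt(1.0 + (c * z) ** 2 + 0.5 * c * z)
    sy = sigma0 * np.sqrt(1.0 + (c * z) ** 2 - 0.5 * c * z)
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    psf = np.exp(-X**2 / (2 * sx**2) - Y**2 / (2 * sy**2))
    return psf / psf.sum()

def grid_to_positions(vol, threshold=0.5, voxel=(1.0, 1.0, 1.0)):
    """Convert a dense (z, y, x) score volume, such as a localization network
    might output, into a list of 3D positions: keep voxels that exceed the
    threshold and are local maxima in their 3x3x3 neighborhood."""
    local_max = (vol == maximum_filter(vol, size=3)) & (vol > threshold)
    zz, yy, xx = np.nonzero(local_max)
    return np.stack([xx * voxel[2], yy * voxel[1], zz * voxel[0]], axis=1)

if __name__ == "__main__":
    # render two emitters at different depths with the toy PSF
    frame = np.zeros((64, 64))
    for x, y, z in [(20, 20, -2.0), (40, 45, 3.0)]:
        frame[y - 7:y + 8, x - 7:x + 8] += astigmatic_psf(z)
    print("toy frame peak:", frame.max())
    # fake a score volume with two confident detections and list them
    vol = np.zeros((8, 64, 64))
    vol[2, 20, 20] = 0.9
    vol[6, 45, 40] = 0.8
    print(grid_to_positions(vol))
```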
    • …